7 research outputs found

    A Review on the Applications of Crowdsourcing in Human Pathology

    The advent of digital pathology has introduced new avenues in diagnostic medicine. Among them, crowdsourcing has attracted researchers' attention in recent years, allowing them to engage thousands of untrained individuals in research and diagnosis. While several articles exist on this topic, prior works have not collectively documented them. We therefore aim to review the applications of crowdsourcing in human pathology in a semi-systematic manner. We first introduce a novel method for systematically searching the literature. Using this method, we then collect hundreds of articles and screen them against a pre-defined set of criteria. Furthermore, we crowdsource part of the screening process to examine another potential application of crowdsourcing. Finally, we review the selected articles and characterize the prior uses of crowdsourcing in pathology.
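
    The abstract describes the crowdsourced screening step only at a high level; as a minimal sketch (not the authors' actual pipeline), one common way to combine several workers' include/exclude judgments per article is a simple majority vote, deferring to expert review on ties or thin coverage. All function names, labels, and article IDs below are hypothetical.

        # Hypothetical sketch: combining crowd screening votes per article by majority,
        # deferring to an expert when the crowd is too thin or tied. Illustrative only.
        from collections import Counter

        def aggregate_screening(votes_by_article, min_votes=3):
            """Return an include/exclude decision per article from crowd votes."""
            decisions = {}
            for article_id, votes in votes_by_article.items():
                if len(votes) < min_votes:
                    decisions[article_id] = "needs_expert_review"
                    continue
                label, count = Counter(votes).most_common(1)[0]
                decisions[article_id] = label if count > len(votes) / 2 else "needs_expert_review"
            return decisions

        # Example: three crowd workers screened two abstracts against the criteria.
        print(aggregate_screening({
            "article-101": ["include", "include", "exclude"],
            "article-102": ["exclude", "exclude", "exclude"],
        }))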

    Kartta Labs: Collaborative Time Travel

    We introduce the modular and scalable design of Kartta Labs, an open source, open data system for virtually reconstructing cities from historical maps and photos. Kartta Labs relies on crowdsourcing and artificial intelligence and consists of two major modules: Maps and 3D models. Each module, in turn, consists of sub-modules that enable the system to reconstruct a city from historical maps and photos. The result is a spatiotemporal reference that can be used to integrate various collected data (curated, sensed, or crowdsourced) for research, education, and entertainment purposes. The system empowers users to experience collaborative time travel: they work together to reconstruct the past and experience it on an open source and open data platform.
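
    As a rough, hypothetical sketch of the two-module organization described above (Maps feeding 3D models), and not Kartta Labs' actual code or APIs, the pipeline can be pictured as a Maps stage that turns a scanned historical map into dated building footprints and a 3D-models stage that extrudes those footprints into simple block models; every name below is illustrative and the sub-modules are stubbed out.

        # Hypothetical two-module sketch (Maps -> 3D models); not Kartta Labs APIs.
        from dataclasses import dataclass
        from typing import List, Tuple

        Polygon = List[Tuple[float, float]]

        @dataclass
        class Footprint:
            polygon: Polygon   # building outline in map coordinates
            year: int          # year depicted by the source map

        def maps_module(scanned_map: bytes, year: int) -> List[Footprint]:
            """Maps module (sketch): georeference and vectorize the scan.
            The real sub-modules are replaced here by a fixed placeholder footprint."""
            placeholder = [(0.0, 0.0), (10.0, 0.0), (10.0, 8.0), (0.0, 8.0)]
            return [Footprint(placeholder, year)]

        def models_3d_module(footprints: List[Footprint], height_m: float = 12.0) -> List[dict]:
            """3D-models module (sketch): extrude each footprint into a block model."""
            return [{"base": fp.polygon, "height_m": height_m, "year": fp.year} for fp in footprints]

        # End-to-end sketch: one historical map scan -> block models for that year.
        print(models_3d_module(maps_module(b"<scanned 1902 map>", 1902)))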

    Two-Step Active Learning for Instance Segmentation with Uncertainty and Diversity Sampling

    Training high-quality instance segmentation models requires an abundance of labeled images with instance masks and classifications, which is often expensive to procure. Active learning addresses this challenge by selecting the most informative and representative images for labeling, striving for optimal performance at minimal labeling cost. Despite its potential, active learning has been less explored for instance segmentation than for tasks such as image classification, which require less labeling effort. In this study, we propose a post-hoc active learning algorithm that integrates uncertainty-based sampling with diversity-based sampling. Our proposed algorithm is not only simple and easy to implement, but it also delivers superior performance on various datasets. Its practical application is demonstrated on a real-world overhead imagery dataset, where it increases labeling efficiency fivefold. Comment: UNCV ICCV 202
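
    The abstract names the two sampling criteria but not how they are combined; as a hedged sketch of the general recipe (not the paper's exact method), one can shortlist images by a precomputed uncertainty score and then enforce diversity by clustering the shortlisted images' embeddings and taking one image per cluster, as below. The uncertainty scores and embeddings are assumed to come from the segmentation model and are random stand-ins here.

        # Sketch of two-step selection: uncertainty filtering, then diversity sampling.
        # Illustrates the general recipe named in the abstract, not the paper's method.
        import numpy as np
        from sklearn.cluster import KMeans

        def select_for_labeling(uncertainty, embeddings, shortlist_size=200, budget=20, seed=0):
            """uncertainty: (N,) per-image scores; embeddings: (N, D) per-image features.
            Returns `budget` image indices to send for labeling."""
            # Step 1 (uncertainty): keep the most uncertain images.
            shortlist = np.argsort(-uncertainty)[:shortlist_size]
            # Step 2 (diversity): cluster the shortlist, take the most uncertain image per cluster.
            kmeans = KMeans(n_clusters=budget, n_init=10, random_state=seed).fit(embeddings[shortlist])
            picks = []
            for c in range(budget):
                members = shortlist[kmeans.labels_ == c]
                if len(members):
                    picks.append(members[np.argmax(uncertainty[members])])
            return np.array(picks)

        # Toy usage with random scores/embeddings standing in for model outputs.
        rng = np.random.default_rng(0)
        chosen = select_for_labeling(rng.random(1000), rng.normal(size=(1000, 64)))
        print(chosen)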

    Agile Modeling: From Concept to Classifier in Minutes

    The application of computer vision to nuanced, subjective use cases is growing. While crowdsourcing has served the vision community well for most objective tasks (such as labeling a "zebra"), it now falters on tasks where there is substantial subjectivity in the concept (such as identifying "gourmet tuna"). However, empowering any user to develop a classifier for their own concept is technically difficult: users are neither machine learning experts nor have the patience to label thousands of examples. In response, we introduce the problem of Agile Modeling: the process of turning any subjective visual concept into a computer vision model through real-time user-in-the-loop interactions. We instantiate an Agile Modeling prototype for image classification and show through a user study (N=14) that users can create classifiers with minimal effort in under 30 minutes. We compare this user-driven process with the traditional crowdsourcing paradigm and find that the crowd's notion of a concept often differs from the user's, especially as concepts become more subjective. Finally, we scale our experiments with simulations of users training classifiers for ImageNet21k categories to further demonstrate its efficacy.
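
    The real-time user-in-the-loop process is described only in prose; the following is a minimal sketch, assuming precomputed image embeddings and a hypothetical ask_user callback standing in for the concept owner, in which a lightweight classifier is retrained each round on the images it is least sure about. It illustrates the general loop, not the paper's actual system.

        # Minimal user-in-the-loop sketch: retrain a light classifier on embeddings,
        # querying the "user" each round on the most ambiguous images. Illustrative only.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def agile_loop(embeddings, ask_user, seed_idx, rounds=5, per_round=10):
            """embeddings: (N, D) array; ask_user(i) -> 0/1 label from the concept owner."""
            labeled = {int(i): ask_user(i) for i in seed_idx}
            clf = LogisticRegression(max_iter=1000)
            for _ in range(rounds):
                clf.fit(embeddings[list(labeled)], np.array(list(labeled.values())))
                # Ask about the images closest to the current decision boundary.
                margin = np.abs(clf.predict_proba(embeddings)[:, 1] - 0.5)
                queries = [int(i) for i in np.argsort(margin) if int(i) not in labeled][:per_round]
                labeled.update({i: ask_user(i) for i in queries})
            return clf

        # Toy usage: a synthetic "concept" plays the role of the human's judgment.
        rng = np.random.default_rng(0)
        emb = rng.normal(size=(500, 32))
        oracle = lambda i: int(emb[i, 0] > 0)
        seeds = list(np.argsort(emb[:, 0])[:3]) + list(np.argsort(emb[:, 0])[-3:])  # covers both classes
        model = agile_loop(emb, oracle, seeds)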

    Lagrangian flow measurements and observations of the 2015 Chilean tsunami in Ventura, CA

    Tsunami-induced coastal currents are spectacular examples of nonlinear and chaotic phenomena. Due to their long periods, tsunamis transport substantial energy into coastal waters, and as this energy interacts with the ubiquitous irregularity of bathymetry, shear and turbulent features appear. The oscillatory character of a tsunami wave train leads to flow reversals, which in principle can spawn persistent turbulent coherent structures (e.g., large vortices or “whirlpools”) that can dominate damage and transport potential. However, no quantitative measurements exist to provide physical insight into this kind of turbulent variability, and no motion recordings are available to help elucidate how these vortical structures evolve and terminate. We report our measurements of currents in Ventura Harbor, California, generated by the 2015 Chilean M8.3 earthquake. We measured surface velocities using GPS drifters and image sequences of surface tracers deployed at a channel bifurcation as the event unfolded. From the maps of the flow field, we find that a tsunami with a near-shore amplitude of 30 cm at 6 m depth produced unexpectedly large currents of up to 1.5 m/s, a fourfold increase over what simple linear scaling would suggest. Coherent turbulent structures appear throughout the event, across a wide range of scales, often generating the greatest local currents. Published in Geophysical Research Letters.
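
    The stated fourfold excess over linear scaling can be sanity-checked with the numbers in the abstract: linear long-wave theory gives a depth-averaged current of roughly u ≈ a·sqrt(g/h), which for a 0.3 m amplitude in 6 m of water is about 0.38 m/s, versus the observed 1.5 m/s. The scaling formula is standard shallow-water theory; applying it here is an illustration, not a calculation taken from the paper.

        # Sanity check of the "fourfold" figure using linear long-wave scaling u ~ a*sqrt(g/h).
        import math

        g, a, h = 9.81, 0.30, 6.0          # gravity (m/s^2), amplitude (m), depth (m) from the abstract
        u_linear = a * math.sqrt(g / h)    # ~0.38 m/s predicted by linear theory
        u_observed = 1.5                   # m/s, measured peak current
        print(f"linear estimate {u_linear:.2f} m/s; observed/linear = {u_observed / u_linear:.1f}x")
        # -> about 3.9x, consistent with the stated fourfold increase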